
    The MiTAP system for monitoring reports of disease outbreak

    MiTAP was developed as an experimental prototype using human language technologies for monitoring infectious disease outbreaks and other global disasters. MiTAP is designed to provide timely multi-lingual information access to analysts, medical experts, health services, and individuals involved in humanitarian assistance and relief work. Every day, thousands of articles from hundreds of global information sources are automatically captured, filtered, translated, tagged, summarized, categorized by content, and made available to users via a news server and web-based search engine. Information extraction technology plays a critical role in many of these processes, presenting information through a variety of time-saving mechanisms to facilitate browsing, searching, sorting, and scanning of articles. Machine translation provides analysts with access to foreign language information otherwise unavailable. We have created a novel prototype by integrating MiTAP with an expert system to help analysts and public health officials deal with overwhelming amounts of data and information in the biomedical domain, specifically relating to disease outbreaks. By alerting the analyst to indications of disease-related activities, the prototype attempts to detect early signs of disease outbreak in non-traditional data sources, giving the analyst more time to focus on potentially interesting data while reducing the time spent investigating false alarms and insignificant events.
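The processing chain described above (capture, filter, tag, summarize, categorize) can be sketched as a sequence of per-article stages. The stage functions, keyword lists, and article format below are invented for illustration; they are not MiTAP's actual components.

```python
def filter_relevant(article):
    # Toy relevance filter: keep only articles mentioning outbreak-related terms.
    keywords = {"outbreak", "epidemic", "cholera", "influenza"}
    words = set(article["text"].lower().replace(".", " ").split())
    return article if words & keywords else None

def categorize(article):
    # Toy content categorizer: attach a tag for each disease mentioned.
    diseases = ["cholera", "influenza"]
    article["tags"] = [d for d in diseases if d in article["text"].lower()]
    return article

def summarize(article):
    # Toy summarizer: the first sentence stands in for a real summary.
    article["summary"] = article["text"].split(".")[0] + "."
    return article

def run_pipeline(articles, stages):
    # Apply each stage in order; a stage returning None drops the article.
    results = []
    for art in articles:
        for stage in stages:
            art = stage(art)
            if art is None:
                break
        else:
            results.append(art)
    return results
```

A real system would replace each toy stage with a language-technology component (machine translation, information extraction, summarization) while keeping the same drop-or-enrich pipeline shape.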

    Discriminating Gender on Twitter

    Accurate prediction of demographic attributes from social media and other informal online content is valuable for marketing, personalization, and legal investigation. This paper describes the construction of a large, multilingual dataset labeled with gender, and investigates statistical models for determining the gender of uncharacterized Twitter users. We explore several different classifier types on this dataset and show how classifier accuracy varies with tweet volume and with the inclusion of various kinds of profile metadata in the models. We also perform a large-scale human assessment using Amazon Mechanical Turk. Our methods significantly outperform both baseline models and almost all humans on the same task.
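As a minimal, hypothetical sketch of one classifier family such a comparison might include, the multinomial naive Bayes model below predicts a label from profile-name tokens. The toy training pairs and token-level features are invented for illustration and are not the paper's dataset or feature set.

```python
import math
from collections import Counter

def train_nb(labeled_texts):
    # Count class frequencies and per-class token frequencies.
    class_counts = Counter()
    word_counts = {}
    vocab = set()
    for text, label in labeled_texts:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for tok in text.lower().split():
            wc[tok] += 1
            vocab.add(tok)
    return class_counts, word_counts, vocab

def predict_nb(model, text):
    # Score each class by log prior + smoothed log likelihood of the tokens.
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_counts.items():
        lp = math.log(n_docs / total_docs)
        wc = word_counts[label]
        denom = sum(wc.values()) + len(vocab)  # Laplace (add-one) smoothing
        for tok in text.lower().split():
            lp += math.log((wc[tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

A production model would more plausibly use character n-grams over names and tweet text rather than whole tokens, but the scoring structure is the same.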

    Using Machine Learning to Predict Trouble During Collaborative Learning

    Our goal is to build and evaluate a web-based, collaborative distance-learning system that will allow groups of students to interact with each other remotely and with an intelligent electronic agent that will aid them in their learning. The agent will follow the discussion and interact with the participants when it detects learning trouble. To recognize problems in the dialogue, we investigated conversational elements that can be used as predictors of effective and ineffective interaction between human students. In this paper we discuss our representation of participant dialogue and the statistical models we are using to determine the effectiveness of group interaction.

    Supplementing Obesity-Related Surveillance with Persistent Health Assessment Tools

    We developed Persistent Health Assessment Tools (PHAT) to equip public health policy makers with more precise tools and timely information for measuring the success of obesity prevention programs. PHAT monitors social media to supplement traditional surveillance, making real-time estimates based on observations of obesity-relevant behaviors. Specifically, we developed models for predicting obesity rates from sets of tweets and a dashboard that provides interactive navigation and time slicing.

    Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

    In recent years the field of neuromorphic low-power systems has gained significant momentum, spurring brain-inspired hardware that operates on principles fundamentally different from those of standard digital computers and thereby consumes orders of magnitude less power. However, wider use of such systems is still hindered by the lack of algorithms that can harness the strengths of these architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), carrying out the entire mapping of the network's recurrent layer on IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize both the synaptic weights and the neural activities to 16 levels, and to limit fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task. Furthermore, we observed that discretizing the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of approximately 17 uW.
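The "constrain" step described above, mapping trained weights onto 16 discrete levels, can be sketched as uniform quantization. The symmetric-range scheme and the function below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def discretize(weights, levels=16):
    # Uniformly quantize a weight array to `levels` signed values.
    # With levels=16 the integer bins span -8..7; scaling by the positive
    # range keeps the largest-magnitude weight representable.
    half = levels // 2
    w_max = float(np.max(np.abs(weights)))
    if w_max == 0.0:
        return np.zeros_like(weights)
    scale = (half - 1) / w_max  # map [-w_max, w_max] onto integer bins
    q = np.clip(np.round(weights * scale), -half, half - 1)
    return q / scale
```

In a train-and-constrain flow, a function like this would be applied to the trained recurrent and input weight matrices before the spiking conversion, so that the spiking network only ever sees hardware-representable values.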